-> *This is based on a [tweet](https://twitter.com/kcolbin/status/1237911340372488193?s=19) by [Kaila Colbin](https://twitter.com/kcolbin) about flattening the curve early in the [[COVID-19]] pandemic.*
This is about the effect where averting a predicted negative scenario through positive action might also diminish the perceived value of the resulting positive scenario:
a) A negative scenario will happen if no positive action is taken to avoid it.
b) The causes for the negative scenario are identified and understood with a fair amount of certainty.
c) The causes are mitigated through positive action and the negative scenario is avoided, leading to a more positive outcome scenario.
However, in some cases we can observe that after the positive action is taken, the resulting scenario becomes disconnected from the original necessity for change, leading to a diminished perception of value for the new reality once it is achieved.
In other words: The success of a mitigation effort might lead to the perception that the positive action was taken in isolation, not to prevent a negative scenario (a diminished relationship between the two).
Anatomy of the effect, with examples based on [[#Example Flattening the COVID-19 curve]]:
* It is largely agreed that action A1 will lead to scenario S1, which is undesirable: “*If we ignore the COVID-19 virus (A1), hundreds of thousands of people will likely die (S1)*”
* This can be prevented by taking action A2, instead leading to scenario S2, which is preferable to S1: “*We can socially distance (A2) to flatten the curve and only thousands will die (S2)*”
* A2 is now seen as the cost of avoiding S1: “*To avoid mass death we need to socially distance*”
* Action A2 is taken, S1 is avoided, and a version of S2 is achieved instead: “*We have socially distanced and only thousands of people died, not hundreds of thousands*”
* In public perception, however, A2 might now be seen as the price of achieving S2, ignoring A1 and S1 since neither became reality: “*We socially distanced and thousands of people died*”
* This might lead to a diminished perceived value of A2 and S2 (see the sketch below): “*We did socially distance, but thousands of people died*”
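A rough numeric sketch of this perception gap, assuming purely hypothetical numbers that only illustrate the structure of the argument (not real COVID-19 figures): the value of A2 looks very different depending on whether the averted scenario S1 is kept as the baseline or quietly replaced by the pre-crisis status quo.

```python
# Purely hypothetical numbers, used only to illustrate the A1/S1 vs. A2/S2 framing.
deaths_if_ignored = 500_000   # S1: predicted outcome if no action is taken (A1)
deaths_after_action = 5_000   # S2: outcome after the mitigation effort (A2)

# Judged against the counterfactual S1, the mitigation A2 averted a disaster.
lives_saved_vs_s1 = deaths_if_ignored - deaths_after_action
print(f"Value of A2 measured against S1: {lives_saved_vs_s1} lives saved")

# Judged against the pre-crisis status quo (S1 never became visible),
# A2 looks like a costly effort that still "ended" with thousands of deaths.
outcome_vs_status_quo = 0 - deaths_after_action
print(f"Value of A2 measured against the status quo: {outcome_vs_status_quo}")
```

Dropping S1 from the comparison is exactly the diminished relationship described above: the same outcome reads as a huge win against the counterfactual, and as a pure loss against the status quo.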
The effect might be a form of [recency bias](https://en.wikipedia.org/wiki/Recency_bias), where people place greater importance on the recent chain of events (the positive scenario achieved through mitigation, S2) than on the original scenario (the predicted but averted negative scenario, S1).
The effect increases the more closely S2 aligns with the current status quo. In the case of [[#Y2K]], the desired scenario was “nothing happens (S2)” as an alternative to “command & control computers crash, leading to potentially catastrophic outages of essential systems (S1)”. The cost was billions of dollars (A2). This led to the perception: “*We paid billions for nothing*”.
## Extreme: Diminishing Causality
In extreme cases the value of preventing S1 is diminished completely, disconnecting A2 and S2 from the initial premise. After averting the negative scenario through positive action, the results might be perceived as a single cause and effect, without any dependency on the original negative scenario.
In other words: The success of a mitigation effort might lead to the assumption that the effort itself was not required in the first place.
Extending the Y2K example above: “*Did we even need to spend billions of dollars when nothing happened?*”
## Related: Self-defeating prophecies
These are predictions that prevent what they predict from happening (as opposed to self-fulfilling prophecies). In the context of [[Futures Exploration]], these can be, for example, warnings meant to inspire action or inaction that avoids a predicted negative scenario. If action is taken and the predicted negative outcome is prevented, the prediction becomes a self-defeating prophecy.
Simple example:
* Scenario: "Millions will die as a direct result of global warming."
* Potential result: "People working actively to prevent this scenario."
Self-defeating prophecies can lead to a diminished perceived value of change, assuming that the audience's perception of the achieved outcome changes after the initial scenario is avoided. Some examples on the [Wikipedia page](https://en.m.wikipedia.org/wiki/Self-defeating_prophecy) actually overlap with the ones in this note, for example COVID-19 predictions and Y2K skepticism.
## Example: Flattening the COVID-19 curve
[Kaila Colbin](https://twitter.com/kcolbin) wrote an [excellent tweet](https://twitter.com/kcolbin/status/1237911340372488193?s=19) about flattening the curve early in the [[COVID-19]] pandemic:
> Here is the thing to understand about flattening the curve.
> It only works if we take necessary measures before they seem necessary.
> And if it works, people will think we over-reacted.
> We have to be willing to look like we over-reacted.
This anticipates the diminished perceived value and prepares decision makers and the participating population for the effect.
This observation is related to the second assumption (b) above: “*The causes for the negative scenario are identified and understood with a fair amount of certainty.*” If there is doubt around the causality, then the entire negative scenario can be doubted as well, leading to diminished trust in urgency (“Did we need to do it *now*?”) or necessity (“Did we need to do it *at all*?”).
But even if there is consensus within a group of experts, the causality might be so complex that it is not obvious to the group that needs to take the preventive action. Sometimes this disconnects A2 and S2 from the initial premise even before the preventive measures are taken.
At the beginning of the COVID-19 crisis (February to May 2020), experts were not sure how effective different measures were at stopping the spread of the virus. Examples: How effective are masks at stopping the spread? How about cloth masks vs. surgical masks vs. FFP2 masks? Assumptions were made based on previous coronavirus strains, which in some cases turned out to be misleading or wrong and had to be corrected as more information about the new virus became available.
Reacting to change and updating scenario predictions accordingly is not a bad thing, but the suboptimal way this was communicated led to mistrust: “*Are we really sure the causes and effects have been correctly identified?*” This left room to doubt the predicted negative scenario: “*Are we really sure hundreds of thousands of people will die?*”
## Example: Y2K
The [year 2000 problem](https://en.wikipedia.org/wiki/Year_2000_problem) is seen as a self-defeating prophecy because fear of massive technology failures encouraged the changes needed to avoid those failures. So the desired scenario was “nothing happens (S2)” as an alternative to “command & control computers crash, leading to potentially catastrophic outages of essential systems (S1)”. The cost was billions of dollars (A2).
This led to the perception: “*We paid billions for nothing*”. There was nothing tangible to point at and say “See? We achieved that”, and there was no reinforced learning from actually experiencing the predicted negative impact, since it was avoided (almost) completely.
As described under “Extreme: Diminishing Causality” above, in extreme cases the value of preventing S1 can diminish completely, disconnecting A2 and S2 from the initial premise: after averting the negative scenario through positive action, the results are perceived as a single cause and effect, without dependency on the original negative scenario.
This is the effect that [Kaila Colbin's](https://twitter.com/kcolbin) [tweet](https://twitter.com/kcolbin/status/1237911340372488193) quoted above anticipates: the success of a mitigation effort might lead to the perception that the effort itself was not required in the first place, leading to uncertainty and doubt: “*Did we even need to spend billions of dollars when nothing happened?*”
[20 Years Later, the Y2K bug seems like a joke. That's because those behind the scenes took it seriously | Time](https://time.com/5752129/y2k-bug-history/)
> “The Y2K crisis didn’t happen precisely because people started preparing for it over a decade in advance. And the general public who was busy stocking up on supplies and stuff just didn’t have a sense that the programmers were on the job,” says Paul Saffo, a futurist and adjunct professor at Stanford University.
## Articles & posts
[The Self-Defeating Prophecy (and How it Works)](https://unintendedconsequenc.es/the-self-defeating-prophecy/)
[Causal Inference using Difference in Differences, Causal Impact, and Synthetic Control | by Prasanna Rajendran | Towards Data Science](https://towardsdatascience.com/causal-inference-using-difference-in-differences-causal-impact-and-synthetic-control-f8639c408268)